A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models

llm
research paper
Author

Santosh Sawant

Published

January 4, 2024

The paper provides a comprehensive taxonomy categorizing over thirty-two techniques for mitigating hallucinations in large language models (LLMs). It groups the techniques into categories such as prompt engineering, self-refinement through feedback and reasoning, prompt tuning, and model development. Key mitigation techniques highlighted include Retrieval-Augmented Generation (RAG), Knowledge Retrieval, CoNLI, and Chain-of-Verification (CoVe); a minimal sketch of one such self-refinement loop appears below.
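To make the self-refinement idea concrete, here is a minimal Chain-of-Verification (CoVe)-style sketch. The `call_llm` helper and the prompt wording are hypothetical placeholders for whatever model client and templates you use; this is a simplified illustration, not the exact procedure described in the paper.

```python
# Minimal sketch of a CoVe-style self-refinement loop.
# `call_llm` is a placeholder: wire it to your own chat/completion client.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("Connect this to your model provider of choice.")

def verify_and_revise(question: str) -> str:
    # 1. Draft an initial answer (this is where hallucinations can appear).
    draft = call_llm(f"Answer the question:\n{question}")

    # 2. Ask the model to plan fact-checking questions about its own draft.
    plan = call_llm(
        "List short verification questions that would check each claim "
        f"in the following answer:\n{draft}"
    )

    # 3. Answer the verification questions independently of the draft,
    #    so errors in the draft do not bias the checks.
    checks = call_llm(f"Answer each question concisely:\n{plan}")

    # 4. Revise the draft using the verification answers as evidence.
    revised = call_llm(
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        f"Verification Q&A:\n{checks}\n"
        "Rewrite the answer, dropping any claim the verification does not support."
    )
    return revised
```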

Paper: https://arxiv.org/pdf/2401.01313.pdf